
    Models and Algorithms for Graph Watermarking

    We introduce models and algorithmic foundations for graph watermarking. Our frameworks include security definitions and proofs, as well as characterizations of when graph watermarking is algorithmically feasible, in spite of the fact that the general problem is NP-complete by simple reductions from the subgraph isomorphism or graph edit distance problems. In the digital watermarking of many types of files, an implicit step in the recovery of a watermark is the mapping of individual pieces of data, such as image pixels or movie frames, from one object to another. In graphs, this step corresponds to approximately matching vertices of one graph to another based on graph invariants such as vertex degree. Our approach is based on characterizing the feasibility of graph watermarking in terms of keygen, marking, and identification functions defined over graph families with known distributions. We demonstrate the strength of this approach with exemplary watermarking schemes for two random graph models, the classic Erdős–Rényi model and a random power-law graph model, both of which are used to model real-world networks.
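
    A minimal Python sketch of the degree-based approximate vertex matching step described above, assuming networkx and a toy edge-flip "marking" step; all names and parameters are illustrative, not taken from the paper:

```python
# Illustrative sketch only (not the paper's scheme): approximate vertex matching
# between a graph and a marked copy, using vertex degree as the graph invariant.
import random
import networkx as nx

def mark(graph, num_flips, seed=0):
    """Return a 'marked' copy of the graph with a few random edge flips."""
    rng = random.Random(seed)
    marked = graph.copy()
    nodes = list(marked.nodes())
    for _ in range(num_flips):
        u, v = rng.sample(nodes, 2)
        if marked.has_edge(u, v):
            marked.remove_edge(u, v)
        else:
            marked.add_edge(u, v)
    return marked

def match_by_degree(original, marked):
    """Greedy approximate correspondence: align both vertex sets sorted by degree."""
    orig_sorted = sorted(original.nodes(), key=lambda v: original.degree(v))
    mark_sorted = sorted(marked.nodes(), key=lambda v: marked.degree(v))
    return dict(zip(mark_sorted, orig_sorted))

G = nx.gnp_random_graph(200, 0.1, seed=1)      # Erdos-Renyi G(n, p)
H = mark(G, num_flips=5)
mapping = match_by_degree(G, H)
fixed = sum(1 for v, u in mapping.items() if v == u)
print(f"{fixed}/200 vertices recovered from degree alone (degree ties make this partial)")
```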

    On the Gold Standard for Security of Universal Steganography

    While symmetric-key steganography is quite well understood both in the information-theoretic and in the computational setting, many fundamental questions about its public-key counterpart resist persistent attempts to solve them. The computational model for public-key steganography was proposed by von Ahn and Hopper at EUROCRYPT 2004. At TCC 2005, Backes and Cachin gave the first universal public-key stegosystem - i.e. one that works on all channels - achieving security against replayable chosen-covertext attacks (SS-RCCA) and asked whether security against non-replayable chosen-covertext attacks (SS-CCA) is achievable. Later, Hopper (ICALP 2005) provided such a stegosystem for every efficiently sampleable channel, but did not achieve universality. He posed the question of whether universality and SS-CCA-security can be achieved simultaneously. No progress on this question has been made in more than a decade. In our work we resolve Hopper's problem in a fairly complete manner: as our main positive result we design an SS-CCA-secure stegosystem that works for every memoryless channel. On the other hand, we prove that this result is the best possible in the context of universal steganography. We provide a family of 0-memoryless channels - where the already sent documents have only marginal influence on the current distribution - and prove that no SS-CCA-secure steganography for this family exists in the standard non-look-ahead model. Comment: EUROCRYPT 2018, llncs style.
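
    For context, the classic rejection-sampling embedding for an efficiently sampleable channel can be sketched as below; this is the textbook symmetric-key construction in the von Ahn-Hopper setting, not the SS-CCA-secure universal stegosystem built in the paper, and the toy channel and key handling are assumptions made only for illustration:

```python
# Textbook rejection-sampling steganography for a sampleable channel (toy setup).
import hmac
import hashlib
import random
import string

def sample_channel(rng):
    """Toy memoryless channel: a uniformly random four-letter 'word'."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(4))

def doc_bit(key, doc):
    """Pseudorandom bit extracted from a covertext with the shared key."""
    return hmac.new(key, doc.encode(), hashlib.sha256).digest()[0] & 1

def embed_bit(key, bit, rng, max_tries=64):
    """Rejection sampling: draw covertexts until one hashes to the wanted bit."""
    for _ in range(max_tries):
        doc = sample_channel(rng)
        if doc_bit(key, doc) == bit:
            return doc
    return doc  # give up after max_tries (tiny reliability error)

key = b"shared-secret-key"
rng = random.Random(7)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stegotext = [embed_bit(key, b, rng) for b in message]
assert [doc_bit(key, d) for d in stegotext] == message
print(stegotext)
```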

    High nutrient-use efficiency during early seedling growth in diverse Grevillea species (Proteaceae)

    Several hypotheses have been proposed to explain the rich floristic diversity in regions characterised by nutrient-impoverished soils; however, none of these hypotheses has been able to explain the rapid diversification, over a relatively short evolutionary time period, of Grevillea, an Australian plant genus with 452 recognised species/subspecies and only 11 million years of evolutionary history. Here, we hypothesise that the apparent evolutionary success of Grevillea might have been triggered by the highly efficient use of key nutrients. The nutrient content in the seeds and nutrient-use efficiency during early seedling growth of 12 species of Grevillea were compared with those of 24 species of Hakea, a closely related genus. Compared with Hakea, the Grevillea species achieved similar growth rates (root and shoot length) during the early stages of seedling growth but contained only approximately half of the seed nutrient content. We conclude that the high nutrient-use efficiency observed in Grevillea might have provided a selective advantage in nutrient-poor ecosystems during evolution and that this property likely contributed to the evolutionary success of Grevillea.

    A Mathematical Model for Astrocyte-Mediated LTP at Single Hippocampal Synapses

    Many contemporary studies have shown that astrocytes play a significant role in modulating both short- and long-term forms of synaptic plasticity. Very few experimental models, however, elucidate the role of astrocytes in long-term potentiation (LTP). Recently, Perea & Araque (2007) demonstrated a role of astrocytes in the induction of LTP at single hippocampal synapses. They suggested a purely pre-synaptic basis for the induction of this N-methyl-D-aspartate (NMDA) receptor-independent LTP, but the mechanisms underlying this pre-synaptic induction were not investigated. Here we propose a mathematical model for astrocyte-modulated LTP which successfully emulates the experimental findings of Perea & Araque (2007). Our study suggests a role for retrograde messengers, possibly nitric oxide (NO), in this pre-synaptically modulated LTP. Comment: 51 pages, 15 figures, Journal of Computational Neuroscience (to appear).

    LPN Decoded

    We propose new algorithms with small memory consumption for the Learning Parity with Noise (LPN) problem, both classically and quantumly. Our goal is to predict the hardness of LPN depending on both of its parameters, the dimension $k$ and the noise rate $\tau$, as accurately as possible both in theory and in practice. Therefore, we analyze our algorithms asymptotically, run experiments on medium-size parameters, and provide bit-complexity predictions for large parameters. Our new algorithms are modifications and extensions of the simple Gaussian-elimination algorithm with recent advanced techniques for decoding random linear codes. Moreover, we enhance our algorithms with the dimension-reduction technique of Blum, Kalai, and Wasserman. This results in a hybrid algorithm that is capable of achieving the best currently known run time for any fixed amount of memory. On the asymptotic side, we achieve significant improvements in the run-time exponents, both classically and quantumly. To the best of our knowledge, we provide the first quantum algorithms for LPN. Due to the small memory consumption of our algorithms, we are able to solve for the first time LPN instances of medium size, e.g. with $k=243, \tau = \frac{1}{8}$, in only 15 days on 64 threads. Our algorithms yield bit-complexity predictions that require relatively large $k$ for small $\tau$. For instance, for small-noise LPN with $\tau = \frac{1}{\sqrt{k}}$, we predict 80-bit classical and only 64-bit quantum security for $k \geq 2048$. For the common cryptographic choice $k=512, \tau = \frac{1}{8}$, we achieve with limited memory 97-bit classical and 70-bit quantum security.
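
    A toy sketch of the simple Gaussian-elimination baseline that these algorithms refine (draw k samples, hope they are all noise-free, solve over GF(2), and test the candidate against fresh samples); parameters are tiny and this is not the paper's memory-optimized hybrid:

```python
# Toy Gaussian-elimination attack on LPN (baseline only, tiny parameters).
import random

def lpn_sample(secret, tau, rng):
    a = [rng.randrange(2) for _ in secret]
    noise = 1 if rng.random() < tau else 0
    b = (sum(x & y for x, y in zip(a, secret)) + noise) % 2
    return a, b

def solve_gf2(rows, rhs):
    """Gaussian elimination over GF(2); returns a solution or None if singular."""
    k = len(rows[0])
    m = [row[:] + [r] for row, r in zip(rows, rhs)]
    for col in range(k):
        piv = next((i for i in range(col, len(m)) if m[i][col]), None)
        if piv is None:
            return None
        m[col], m[piv] = m[piv], m[col]
        for i in range(len(m)):
            if i != col and m[i][col]:
                m[i] = [x ^ y for x, y in zip(m[i], m[col])]
    return [m[i][k] for i in range(k)]

def gauss_attack(k, tau, oracle, tests=60):
    check = [oracle() for _ in range(tests)]          # fresh samples for testing
    while True:
        batch = [oracle() for _ in range(k)]          # hope these are noise-free
        cand = solve_gf2([a for a, _ in batch], [b for _, b in batch])
        if cand is None:
            continue
        errs = sum((sum(x & y for x, y in zip(a, cand)) % 2) != b for a, b in check)
        if errs <= tests * (tau + 0.15):              # crude acceptance threshold
            return cand

rng = random.Random(3)
k, tau = 16, 0.125
secret = [rng.randrange(2) for _ in range(k)]
found = gauss_attack(k, tau, lambda: lpn_sample(secret, tau, rng))
print(found == secret)
```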

    On Iterative Collision Search for LPN and Subset Sum

    Iterative collision search procedures play a key role in developing combinatorial algorithms for the subset sum and learning parity with noise (LPN) problems. In both scenarios, the single-list pair-wise iterative collision search finds the most solutions and offers the best efficiency. However, due to its complex probabilistic structure, no rigorous analysis for it appears to be available, to the best of our knowledge. As a result, theoretical works often resort to overly constrained and sub-optimal iterative collision search variants in exchange for analytic simplicity. In this paper, we present a rigorous analysis of the single-list pair-wise iterative collision search method and its applications to subset sum and LPN. In the LPN literature, the method is known as the LF2 heuristic. Besides LF2, we also present rigorous analyses of other LPN-solving heuristics and show that they work well when combined with LF2. Putting it all together, we significantly narrow the gap between theoretical and heuristic algorithms for LPN.
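
    A minimal sketch of one round of the single-list pair-wise (LF2-style) collision step analyzed above: samples are bucketed on their last b coordinates and XORed pair-wise within each bucket, which zeroes those coordinates while roughly preserving the list size when it is about 3·2^b; this only illustrates the heuristic, not the paper's analysis:

```python
# One LF2-style reduction round for LPN samples (toy illustration).
import random
from itertools import combinations
from collections import defaultdict

def lpn_sample(secret, tau, rng):
    a = tuple(rng.randrange(2) for _ in secret)
    b = (sum(x & y for x, y in zip(a, secret)) + (rng.random() < tau)) % 2
    return a, b

def lf2_round(samples, b_coords):
    """Single-list pair-wise collision round on the last b_coords coordinates."""
    buckets = defaultdict(list)
    for a, b in samples:
        buckets[a[-b_coords:]].append((a, b))
    reduced = []
    for bucket in buckets.values():
        for (a1, b1), (a2, b2) in combinations(bucket, 2):
            reduced.append((tuple(x ^ y for x, y in zip(a1, a2)), b1 ^ b2))
    return reduced

rng = random.Random(1)
k, tau, b_coords = 20, 0.05, 6
secret = [rng.randrange(2) for _ in range(k)]
samples = [lpn_sample(secret, tau, rng) for _ in range(3 * 2 ** b_coords)]
reduced = lf2_round(samples, b_coords)
print(len(samples), "->", len(reduced), "samples;",
      "all zero on last", b_coords, "coords:",
      all(a[-b_coords:] == (0,) * b_coords for a, _ in reduced))
```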

    Cliptography: Clipping the Power of Kleptographic Attacks

    Kleptography, introduced 20 years ago by Young and Yung [Crypto ’96], considers the (in)security of malicious implementations (or instantiations) of standard cryptographic primitives that embed a “backdoor” into the system. Remarkably, crippling subliminal attacks are possible even if the subverted cryptosystem produces output indistinguishable from a truly secure “reference implementation.” Bellare, Paterson, and Rogaway [Crypto ’14] recently initiated a formal study of such attacks on symmetric key encryption algorithms, demonstrating that a kleptographic attack can be mounted in broad generality against randomized components of cryptographic systems. We enlarge the scope of current work on the problem by permitting adversarial subversion of (randomized) key generation; in particular, we initiate the study of cryptography in the complete subversion model, where all relevant cryptographic primitives are subject to kleptographic attacks. We construct secure one-way permutations and trapdoor one-way permutations in this “complete subversion” model, describing a general, rigorous immunization strategy to clip the power of kleptographic subversions. Our strategy can be viewed as a formal treatment of the folklore “nothing up my sleeve” wisdom in cryptographic practice. We also describe a related “split program” model that can directly inform practical deployment. We additionally apply our general immunization strategy to directly yield a backdoor-free PRG. This notably amplifies previous results of Dodis, Ganesh, Golovnev, Juels, and Ristenpart [Eurocrypt ’15], which require an honestly generated random key. We then examine two standard applications of (trapdoor) one-way permutations in this complete subversion model and construct “higher level” primitives via black-box reductions. We showcase a digital signature scheme that preserves existential unforgeability when all algorithms (including key generation, which was not considered to be under attack before) are subject to kleptographic attacks. Additionally, we demonstrate that the classic Blum–Micali pseudorandom generator (PRG), using an “immunized” one-way permutation, yields a backdoor-free PRG. Alongside development of these secure primitives, we set down a hierarchy of kleptographic attack models which we use to organize past results and our new contributions; this taxonomy may be valuable for future work.
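
    As a point of reference, the classic Blum–Micali construction mentioned above can be sketched as follows, instantiated with a toy discrete-exponentiation permutation; this is the textbook PRG with insecure parameters, not the immunized, subversion-resistant variant from the paper:

```python
# Classic Blum-Micali PRG: iterate x -> g^x mod p and output a hard-core bit of
# the state ("is x in the lower half?").  Toy parameters, utterly insecure.
def blum_micali(seed, nbits, p=101, g=2):
    """p prime, g a generator of Z_p^*; state x ranges over {1, ..., p-1}."""
    x = seed % (p - 1) + 1
    out = []
    for _ in range(nbits):
        out.append(1 if x <= (p - 1) // 2 else 0)  # hard-core bit of current state
        x = pow(g, x, p)                           # apply the one-way permutation
    return out

print("".join(str(b) for b in blum_micali(seed=37, nbits=32)))
```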

    Genome-wide linkage analysis of 1,233 prostate cancer pedigrees from the International Consortium for Prostate Cancer Genetics using novel sumLINK and sumLOD analyses

    BACKGROUND: Prostate cancer (PC) is generally believed to have a strong inherited component, but the search for susceptibility genes has been hindered by the effects of genetic heterogeneity. The recently developed sumLINK and sumLOD statistics are powerful tools for linkage analysis in the presence of heterogeneity. METHODS: We performed a secondary analysis of 1,233 PC pedigrees from the International Consortium for Prostate Cancer Genetics (ICPCG) using two novel statistics, the sumLINK and sumLOD. For both statistics, dominant and recessive genetic models were considered. False discovery rate (FDR) analysis was conducted to assess the effects of multiple testing. RESULTS: Our analysis identified significant linkage evidence at chromosome 22q12, confirming previous findings by the initial conventional analyses of the same ICPCG data. Twelve other regions were identified with genome-wide suggestive evidence for linkage. Seven regions (1q23, 5q11, 5q35, 6p21, 8q12, 11q13, 20p11–q11) are near loci previously identified in the initial ICPCG pooled data analysis or the subset of aggressive PC pedigrees. Three other regions (1p12, 8p23, 19q13) confirm loci reported by others, and two (2p24, 6q27) are novel susceptibility loci. FDR testing indicates that over 70% of these results are likely true positive findings. Statistical recombinant mapping narrowed regions to an average of 9 cM. CONCLUSIONS: Our results represent genomic regions with the greatest consistency of positive linkage evidence across a very large collection of high-risk PC pedigrees, using new statistical tests that deal powerfully with heterogeneity. These regions are excellent candidates for further study to identify PC predisposition genes. Prostate 70: 735–744, 2010. © 2010 Wiley-Liss, Inc.

    Faster Algorithms for Solving LPN

    The LPN problem, lying at the core of many cryptographic constructions for lightweight and post-quantum cryptography, has received a lot of attention recently. The best published algorithm for solving it, presented at Asiacrypt 2014, improved the classical BKW algorithm by using covering codes, and was claimed to marginally compromise the 80-bit security of the HB variants, LPN-C, and Lapin. In this paper, we develop faster algorithms for solving LPN based on an optimal, precise embedding of cascaded concrete perfect codes, in a similar framework but with many optimizations. Our algorithm outperforms the previous methods for the proposed parameter choices and distinctly breaks the 80-bit security bound of the instances suggested in cryptographic schemes like HB$^+$, HB$^\#$, LPN-C, and Lapin.
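
    A toy illustration of the covering-code idea behind these algorithms: each length-3 block of a sample's a-vector is approximated by the nearest codeword of the perfect [3,1] repetition code, so the inner product depends only on one folded secret bit per block (shrinking the dimension) at the cost of extra noise from the approximation error; the paper's cascaded optimal embedding is not reproduced here:

```python
# Covering-code dimension reduction for LPN, illustrated with the [3,1]
# repetition code (core idea only; toy parameters).
import random

def nearest_repetition_codeword(block):
    """Decode a 3-bit block to the nearest codeword of {000, 111}."""
    return (1, 1, 1) if sum(block) >= 2 else (0, 0, 0)

def fold_sample(a, b):
    """Replace each length-3 block of a by its covering codeword's single bit."""
    folded_a = []
    for i in range(0, len(a), 3):
        cw = nearest_repetition_codeword(a[i:i + 3])
        folded_a.append(cw[0])               # codeword is determined by one bit
    return folded_a, b

rng = random.Random(5)
k, tau = 12, 0.1
secret = [rng.randrange(2) for _ in range(k)]
folded_secret = [(secret[i] + secret[i + 1] + secret[i + 2]) % 2
                 for i in range(0, k, 3)]    # secret as seen through 111-blocks

# Empirically estimate the effective noise rate of the folded samples.
mismatches, trials = 0, 20000
for _ in range(trials):
    a = [rng.randrange(2) for _ in range(k)]
    e = 1 if rng.random() < tau else 0
    b = (sum(x & y for x, y in zip(a, secret)) + e) % 2
    fa, fb = fold_sample(a, b)
    pred = sum(x & y for x, y in zip(fa, folded_secret)) % 2
    mismatches += (pred != fb)
print(f"effective noise rate after folding: {mismatches / trials:.3f} (was {tau})")
```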

    On Public Key Encryption from Noisy Codewords

    Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, or alternatively they can work over the binary field but have a low noise entropy that gives rise to sub-exponential attacks. Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions. Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we consider an abstract encryption scheme that unifies all three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distribution. Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the "approximate duality" conjecture from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time $2^{O(\sqrt{n})}$, where $n$ is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code). On the flip side, counterexamples to the above conjecture (if it is false) may lead to candidate public key encryption schemes with improved security guarantees. We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be unconditionally attacked in time $2^{O(n/\log n)}$, where $n$ is the ciphertext size. Combining this attack with the security proof of Regev's cryptosystem, we immediately obtain an algorithm that solves the learning parity with noise (LPN) problem in time $2^{O(n/\log\log n)}$ using only $n^{1+\epsilon}$ samples, reproducing the result of Lyubashevsky (RANDOM 2005) in a conceptually different way. Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families (Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010), we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.
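
    To make the "noisy linear encoding" template concrete, here is a toy sketch of Regev's scheme (STOC 2005), one of the three schemes the abstract unifies; parameters are tiny and insecure, chosen only so the example runs:

```python
# Toy Regev-style encryption: pk is a noisy linear encoding (A, b = A*s + e),
# a bit is encrypted by summing a random subset of rows and adding bit * q/2.
import random

Q, N, M = 2003, 10, 200          # modulus, secret dimension, number of samples

def keygen(rng):
    s = [rng.randrange(Q) for _ in range(N)]
    A = [[rng.randrange(Q) for _ in range(N)] for _ in range(M)]
    e = [rng.randrange(-2, 3) for _ in range(M)]             # small noise
    b = [(sum(ai * si for ai, si in zip(row, s)) + ei) % Q
         for row, ei in zip(A, e)]
    return (A, b), s

def encrypt(pk, bit, rng):
    A, b = pk
    subset = [i for i in range(M) if rng.randrange(2)]        # random subset of rows
    u = [sum(A[i][j] for i in subset) % Q for j in range(N)]
    v = (sum(b[i] for i in subset) + bit * (Q // 2)) % Q
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = (v - sum(ui * si for ui, si in zip(u, sk))) % Q
    return 1 if Q // 4 < d < 3 * Q // 4 else 0                # round to 0 or Q/2

rng = random.Random(11)
pk, sk = keygen(rng)
for m in (0, 1, 1, 0):
    assert decrypt(sk, encrypt(pk, m, rng)) == m
print("toy Regev scheme round-trips correctly on these examples")
```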